PyVision Video Interface Demonstration

A demonstration of the various video classes and the common interface they share for controlling playback and viewing output.


In [1]:
import pyvision as pv

In [2]:
vid_file = pv.TAZ_VIDEO  #built-in sample video in PyVision
vid = pv.Video(vid_file)
type(vid)


Out[2]:
pyvision.types.Video.Video

A video is an iterable object. You can play a video in two easy ways:

  • By iterating over the frames and displaying each to the same window.

  • By using the built-in play() method, which comes with some nifty features.


In [3]:
for f in vid:
    f.show(delay=33, window="Demo")  #delay is ms to pause before next image is shown.

vid.play(delay=33, window="Demo")  #33 millisec delay is about 30 fps.


Out[3]:
92

The play() method allows the user to pause and resume the video, step through frames, or abort early. It also annotates the frame number in the upper left and can call a callback function.

The video will start paused if delay=0, so highlight the playback window and press the "s" key to step to the next frame. When you do, the IPython Notebook will display the text menu for the pause-and-play interface.


In [ ]:
vid.play(delay=0, window="Demo")
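
The callback feature mentioned above lets you process each frame as it is displayed. The cell below is a minimal sketch; it assumes play() accepts an onNewFrame keyword that is called once per frame with the current image and frame number, so check the exact keyword and callback signature against your PyVision version.

In [ ]:
def on_frame(img, frame_num, **kwargs):
    #hypothetical callback: called once per displayed frame (signature is an assumption)
    if frame_num % 30 == 0:
        print "reached frame %d" % frame_num

vid = pv.Video(vid_file)
vid.play(delay=33, window="Demo", onNewFrame=on_frame)  #onNewFrame keyword is an assumption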

A VideoMontage is a set of videos that play together on the same screen. The collection of videos is given as a dictionary, and the videos are sorted by the dictionary keys. All videos play until completion; shorter videos halt on their last frame until the others finish.


In [4]:
import os
test_dir = os.path.dirname(vid_file)  #in the same directory as the TAZ video there are two others
vid_cars = pv.Video( os.path.join(test_dir,"toy_car.m4v"))  #toy cars on a carpet
vid_bug = pv.Video( os.path.join(test_dir,"BugsSample.m4v")) #a bug in grass
vid_taz = pv.Video(vid_file) #taz video
videoM = pv.VideoMontage( {1:vid_cars, 2:vid_bug, 3:vid_taz}, layout=(1,3), tileSize=(200,200))
videoM.play(delay=33, window="Demo")


All Videos in the Video Montage Have Completed.
Out[4]:
175

A VideoMontage object can itself be treated like a video, and can therefore be embedded in another montage.


In [6]:
vid1 = pv.Video(vid_file)
vid2 = pv.Video(vid_file)
vid3 = pv.Video(vid_file)
videoM1 = pv.VideoMontage( {1:vid1, 2:vid2}, layout=(2,1), tileSize=(133,100))
videoM2 = pv.VideoMontage( {1:videoM1, 2:vid3}, layout=(1,2), tileSize=(266,200)) #a montage within a montage
videoM2.play(delay=33, window="Demo")


All Videos in the Video Montage Have Completed.
All Videos in the Video Montage Have Completed.
Out[6]:
94

There is an abstract VideoInterface class that defines the play() method, the pause-and-play interface, and the iteration behavior shared by all video types.

The following classes all implement the VideoInterface:

  • Video
  • VideoMontage
  • VideoFromImages - For using a directory of images as a video
  • VideoFromFileList - For using a list of image file names as a video (a sketch of these two file-based classes follows this list)
  • VideoFromImageStack - For using a stack of numpy arrays as a video
  • Webcam
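
As an example of the file-based variants, the cell below sketches typical use of VideoFromImages and VideoFromFileList. The frame directory and file names are hypothetical, and the constructor arguments (a directory path for VideoFromImages, a list of image paths for VideoFromFileList) reflect common usage; check your PyVision version for the exact signatures.

In [ ]:
import glob, os

frame_dir = "./frames"  #hypothetical directory of still images
vid_dir = pv.VideoFromImages(frame_dir)  #treat the directory of images as a video
vid_dir.play(delay=33, window="Demo")

file_list = sorted(glob.glob(os.path.join(frame_dir, "*.jpg")))  #explicit list of image files
vid_list = pv.VideoFromFileList(file_list)  #treat the list of file names as a video
vid_list.play(delay=33, window="Demo")

The Webcam class, which captures frames from an attached camera, is demonstrated in the next cell.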

In [7]:
vid = pv.Webcam()
vid.play(window="Demo", delay=30)


PAUSED: Select <a>bort program, <q>uit playback, <c>ontinue playback, or <s>tep to next frame.
PAUSED: Select <a>bort program, <q>uit playback, <c>ontinue playback, or <s>tep to next frame.
Out[7]:
125

ImageBuffer is a useful class for buffering a set of PyVision images. An ImageBuffer can be used to buffer video playback or to transform a set of grayscale images into an image stack, among other uses.
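
You can also fill a buffer manually while iterating over a video. The sketch below assumes ImageBuffer provides add() and isFull() methods for populating the buffer; verify the method names against your PyVision version.

In [ ]:
buf = pv.ImageBuffer(N=25)
for frame in pv.Video(vid_file):
    buf.add(frame)  #assumed to drop the oldest image once the buffer holds N frames
    if buf.isFull():
        break  #stop once we have 25 frames

Alternatively, the play() method can fill the buffer for you, as the next cell shows.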


In [5]:
import pyvision as pv
import cv

imgBuffer = pv.ImageBuffer(N=25)
vid = pv.Video("./seq3.avi")  #a local video file in the working directory
vid.play(window="Demo",delay=20,imageBuffer=imgBuffer)  #play() method supports a buffer


PAUSED: Select <a>bort program, <q>uit playback, <c>ontinue playback, or <s>tep to next frame.
Out[5]:
141

In [6]:
#the most recent images will be in the buffer
imgBuffer.show(window="Demo", delay=0) #simple way to view buffer
cv.DestroyWindow("Demo")  #kill the window if we don't need it anymore

In [7]:
#use an image montage if you want greater layout control
montage = imgBuffer.asMontage(layout=(3,5))
montage.show(window="Demo")
cv.DestroyWindow("Demo")

We can convert an ImageBuffer to a 3D numpy array representing a "stack" of grayscale images. This can be convenient for performing numpy matrix operations on all images in the stack.


In [13]:
#we can convert the buffer to a numpy grayscale image stack
(w,h) = imgBuffer[0].size 
imstack = imgBuffer.asStackBW(size=(w/2, h/2))
print type(imstack)
vid2 = pv.VideoFromImageStack(imstack)  #we can playback an image stack as a video
vid2.play(window="Demo", delay=60)


<type 'numpy.ndarray'>
Out[13]:
24

In [14]:
#because an image stack is a numpy array, operations like thresholding all
# images in the grayscale video are very easy.
imstack2 = (imstack < 100)*255.0
vid3 = pv.VideoFromImageStack(imstack2)
vid3.play(window="Demo", delay=60)


Out[14]:
24
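
Other whole-stack operations follow the same pattern. The cell below computes the per-pixel mean over all frames and wraps the result for display; it assumes the first axis of the stack indexes frames and that pv.Image accepts a 2D numpy array, so treat it as a sketch.

In [ ]:
import numpy as np

mean_frame = imstack.astype(np.float32).mean(axis=0)  #average over the frame axis (assumed to be axis 0)
pv.Image(mean_frame).show(window="Demo", delay=0)  #assumes pv.Image can wrap a 2D numpy array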

In [15]:
cv.DestroyAllWindows()
